Bidirectional motion planning tends to reduce planning time, on average, compared to its unidirectional counterparts. In single-query feasible motion planning, using bidirectional search to find a continuous motion plan requires edge connections between the forward and reverse search trees. Such tree-to-tree connections require solving a two-point boundary value problem (BVP). However, a two-point BVP solution can be difficult or impossible to compute for many systems. We present a novel bidirectional search strategy that does not require solving the two-point BVP. Instead of directly connecting the forward and reverse trees, the cost information of the reverse tree is used as a guiding heuristic for the forward search. This enables the forward search to converge quickly to a feasible solution without solving the two-point BVP. We present two new algorithms (GBRRT and GABRRT) that use this strategy, and we run multiple software simulations with a variety of dynamical systems as well as real-world hardware experiments to show that our algorithms perform as well as or better than existing state-of-the-art methods at quickly finding an initial feasible solution.
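The abstract gives no implementation details, but the guiding idea can be sketched roughly as follows. This is a minimal illustration only, not the papers' actual procedure: the helpers `sample_fn` and `steer_fn`, the `k`-nearest parent selection, and the node fields are assumptions.

```python
import math

# Illustrative node container; the real algorithms maintain richer data.
class Node:
    def __init__(self, state, cost_to_goal=float("inf")):
        self.state = state                # e.g. a tuple of configuration values
        self.cost_to_goal = cost_to_goal  # known for reverse-tree nodes

def dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def reverse_cost_heuristic(x, reverse_tree):
    """Cost-to-goal estimate: nearest reverse-tree node's cost plus the
    distance still needed to reach that node."""
    node = min(reverse_tree, key=lambda n: dist(n.state, x))
    return node.cost_to_goal + dist(node.state, x)

def guided_extend(forward_tree, reverse_tree, sample_fn, steer_fn, k=5):
    """One guided forward-extension step: among k nearby parents, keep the
    forward propagation whose endpoint looks cheapest under the reverse-tree
    heuristic. No two-point BVP is solved anywhere."""
    x_rand = sample_fn()
    parents = sorted(forward_tree, key=lambda n: dist(n.state, x_rand))[:k]
    best = None
    for parent in parents:
        x_new = steer_fn(parent.state, x_rand)      # forward simulation only
        h = reverse_cost_heuristic(x_new, reverse_tree)
        if best is None or h < best[2]:
            best = (parent, x_new, h)
    return best   # caller appends Node(x_new) with an edge from the chosen parent
```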
Learning efficient and interpretable policies has been a challenging task in reinforcement learning (RL), particularly in the visual RL setting with complex scenes. While neural networks have achieved competitive performance, the resulting policies are often over-parameterized black boxes that are difficult to interpret and deploy efficiently. More recent symbolic RL frameworks have shown that high-level domain-specific programming logic can be designed to handle both policy learning and symbolic planning. However, these approaches rely on coded primitives with little feature learning, and when applied to high-dimensional visual scenes, they can suffer from scalability issues and perform poorly when images have complex object interactions. To address these challenges, we propose \textit{Differentiable Symbolic Expression Search} (DiffSES), a novel symbolic learning approach that discovers discrete symbolic policies using partially differentiable optimization. By using object-level abstractions instead of raw pixel-level inputs, DiffSES is able to leverage the simplicity and scalability advantages of symbolic expressions, while also incorporating the strengths of neural networks for feature learning and optimization. Our experiments demonstrate that DiffSES is able to generate symbolic policies that are simpler and more scalable than state-of-the-art symbolic RL methods, with a reduced amount of symbolic prior knowledge.
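As a rough illustration of what a symbolic policy over object-level abstractions (rather than raw pixels) can look like, consider the toy rule below. It is invented for the example, not a policy discovered by DiffSES; the `Obj` record and action names are assumptions.

```python
from dataclasses import dataclass

# Hypothetical object-level abstraction: each detected object becomes a small
# feature record instead of a patch of pixels.
@dataclass
class Obj:
    x: float
    y: float
    kind: str

def symbolic_policy(objects):
    """A toy discrete policy written as a readable symbolic rule over object
    features (DiffSES searches for expressions of this general flavor)."""
    agent = next(o for o in objects if o.kind == "agent")
    target = min((o for o in objects if o.kind == "target"),
                 key=lambda o: abs(o.x - agent.x) + abs(o.y - agent.y))
    if abs(target.x - agent.x) > abs(target.y - agent.y):
        return "RIGHT" if target.x > agent.x else "LEFT"
    return "UP" if target.y > agent.y else "DOWN"

print(symbolic_policy([Obj(0, 0, "agent"), Obj(3, 1, "target")]))  # RIGHT
```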
Current image generation models struggle to reliably produce well-formed visual text. In this paper, we investigate a key contributing factor: popular text-to-image models lack character-level input features, making it much harder to predict a word's visual makeup as a series of glyphs. To quantify the extent of this effect, we conduct a series of controlled experiments comparing character-aware vs. character-blind text encoders. In the text-only domain, we find that character-aware models provide large gains on a novel spelling task (WikiSpell). Transferring these learnings onto the visual domain, we train a suite of image generation models, and show that character-aware variants outperform their character-blind counterparts across a range of novel text rendering tasks (our DrawText benchmark). Our models set a much higher state-of-the-art on visual spelling, with 30+ point accuracy gains over competitors on rare words, despite training on far fewer examples.
Multi-Task Learning (MTL) has proven valuable in user-facing products, offering faster training, better data efficiency, reduced overfitting, and more. MTL achieves this by sharing network parameters and training a single network for multiple tasks simultaneously. However, MTL does not provide a solution when each task needs to be trained on a different dataset. To solve this problem, we propose an architecture named TreeDNN along with its training methodology. TreeDNN trains the model on multiple datasets simultaneously, where each branch of the tree may require a different training dataset. Our results show that TreeDNN provides competitive performance, with the advantages of a reduced ROM requirement for parameter storage and increased system responsiveness, since only the relevant branch is loaded at inference time.
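A minimal sketch of the tree-shaped parameter sharing described above, assuming a single shared trunk with one small branch per task; the layer sizes, task names, and two-level depth are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class TreeDNNSketch(nn.Module):
    """Shared trunk plus per-task branches; each branch can be trained on its
    own dataset, and only the requested branch is needed at inference."""
    def __init__(self, in_dim, hidden, task_out_dims):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        self.branches = nn.ModuleDict({
            name: nn.Linear(hidden, out_dim)
            for name, out_dim in task_out_dims.items()
        })

    def forward(self, x, task):
        # Trunk parameters are shared across every task; the branch is selected
        # per request, so unused branches never have to be loaded.
        return self.branches[task](self.trunk(x))

model = TreeDNNSketch(in_dim=32, hidden=64,
                      task_out_dims={"task_a": 10, "task_b": 3})
logits = model(torch.randn(4, 32), task="task_a")
```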
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
We consider the problem of multi-agent navigation and collision avoidance when observations are limited to the local neighborhood of each agent. We propose InforMARL, a novel architecture for multi-agent reinforcement learning (MARL) which uses local information intelligently to compute paths for all the agents in a decentralized manner. Specifically, InforMARL aggregates information about the local neighborhood of agents for both the actor and the critic using a graph neural network and can be used in conjunction with any standard MARL algorithm. We show that (1) in training, InforMARL has better sample efficiency and performance than baseline approaches, despite using less information, and (2) in testing, it scales well to environments with arbitrary numbers of agents and obstacles.
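As a rough sketch of the idea (not the paper's exact network), each agent can aggregate the features of entities in its local neighborhood into a fixed-size vector that feeds both the actor and the critic. The dimensions and the mean-pooling aggregation below are placeholders for the graph neural network InforMARL actually uses.

```python
import torch
import torch.nn as nn

class LocalAggregator(nn.Module):
    """Embed each neighboring entity and pool the embeddings into one vector
    that summarizes the agent's local neighborhood."""
    def __init__(self, ent_dim, hid):
        super().__init__()
        self.embed = nn.Linear(ent_dim, hid)

    def forward(self, own_feat, neighbor_feats):
        # neighbor_feats: (num_neighbors, ent_dim) for entities within sensing range
        msgs = torch.relu(self.embed(neighbor_feats)).mean(dim=0)
        return torch.cat([own_feat, msgs], dim=-1)

agg = LocalAggregator(ent_dim=6, hid=16)
obs = agg(torch.randn(6), torch.randn(5, 6))   # shared input for actor and critic
actor = nn.Sequential(nn.Linear(6 + 16, 32), nn.ReLU(), nn.Linear(32, 5))
action_logits = actor(obs)
```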
Finetuning language models on a collection of datasets phrased as instructions has been shown to improve model performance and generalization to unseen tasks. In this paper we explore instruction finetuning with a particular focus on (1) scaling the number of tasks, (2) scaling the model size, and (3) finetuning on chain-of-thought data. We find that instruction finetuning with the above aspects dramatically improves performance on a variety of model classes (PaLM, T5, U-PaLM), prompting setups (zero-shot, few-shot, CoT), and evaluation benchmarks (MMLU, BBH, TyDiQA, MGSM, open-ended generation). For instance, Flan-PaLM 540B instruction-finetuned on 1.8K tasks outperforms PaLM 540B by a large margin (+9.4% on average). Flan-PaLM 540B achieves state-of-the-art performance on several benchmarks, such as 75.2% on five-shot MMLU. We also publicly release Flan-T5 checkpoints, which achieve strong few-shot performance even compared to much larger models, such as PaLM 62B. Overall, instruction finetuning is a general method for improving the performance and usability of pretrained language models.
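As a toy illustration of what "phrasing a dataset as instructions" and chain-of-thought formatting look like in practice, consider the example below; the templates are invented for illustration and are not the actual Flan collection templates.

```python
# One dataset example rendered as an instruction, with and without a rationale.
example = {
    "question": ("A bat and a ball cost $1.10 in total. The bat costs $1.00 "
                 "more than the ball. How much does the ball cost?"),
    "rationale": "Let the ball cost x; then x + (x + 1.00) = 1.10, so x = 0.05.",
    "answer": "$0.05",
}

zero_shot = (f"Answer the following question.\n\n{example['question']}\n\n"
             f"Answer: {example['answer']}")

chain_of_thought = (f"Answer the following question. Think step by step.\n\n"
                    f"{example['question']}\n\n"
                    f"{example['rationale']} The answer is {example['answer']}.")

print(zero_shot)
print(chain_of_thought)
```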
A product's attribute values are an essential component of any e-commerce platform. Attribute value extraction (AVE) involves extracting a product's attributes and their values from its title or description. In this paper, we propose to tackle the AVE task with a generative framework. We formulate AVE as a generation problem under two paradigms: word-sequence-based generation and position-sequence-based generation. We conduct experiments on two datasets, on which the generative approaches achieve new state-of-the-art results. This shows that the proposed framework can be used for the AVE task without additional labeling or task-specific model design.
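As a toy illustration of the two paradigms, one product title is formatted below as a generation target in each style; the exact target formats are not given in the abstract and are assumed here.

```python
# Illustrative seq2seq targets for attribute value extraction.
title = "Apple iPhone 13 128GB Blue Smartphone"
tokens = title.split()

# Word-sequence paradigm: the model generates attribute-value text directly.
word_target = "Brand: Apple | Storage: 128GB | Color: Blue"

# Position-sequence paradigm: the model generates token index spans instead.
spans = {"Brand": (0, 0), "Storage": (3, 3), "Color": (4, 4)}
position_target = " ".join(f"{attr} {s} {e}" for attr, (s, e) in spans.items())

print(word_target)
print(position_target)   # Brand 0 0 Storage 3 3 Color 4 4
```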
Laser interstitial thermal therapy (LITT) is a novel minimally invasive treatment that ablates intracranial structures to treat mesial temporal lobe epilepsy (MTLE). Region-of-interest (ROI) segmentation before and after LITT would enable automated lesion quantification for objectively assessing treatment efficacy. Deep learning techniques such as convolutional neural networks (CNNs) are the state-of-the-art solution for ROI segmentation but require large amounts of annotated data for training. However, collecting large datasets from an emerging treatment such as LITT is impractical. In this paper, we propose a progressive brain lesion synthesis framework (PAVAE) to expand both the quantity and diversity of the training dataset. Specifically, our framework consists of two sequential networks: a mask synthesis network and a mask-guided lesion synthesis network. To make better use of extra information and provide additional supervision during network training, we design a condition embedding block (CEB) and a mask embedding block (MEB) to encode the inherent conditions of the masks into the feature space. Finally, a segmentation network is trained using both real and synthetic lesion images to evaluate the effectiveness of the proposed framework. Experimental results show that our method achieves realistic synthesis results and improves the performance of the downstream segmentation task over traditional data augmentation techniques.
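At a high level the framework chains the two generators. The sketch below shows only that data flow and treats both networks, including their CEB/MEB conditioning, as black boxes with assumed call signatures; it is not the paper's implementation.

```python
import torch

def synthesize_lesion_image(mask_generator, lesion_generator, brain_image, condition):
    """Stage 1 samples a lesion mask from a latent code and a condition;
    stage 2 paints a lesion into the brain image, guided by that mask."""
    z = torch.randn(1, 128)                                      # latent code (size is a placeholder)
    mask = mask_generator(z, condition)                          # mask synthesis network
    synthetic = lesion_generator(brain_image, mask, condition)   # mask-guided lesion synthesis
    return synthetic, mask

# Toy usage with stand-in generators, just to show the data flow:
fake_mask_gen = lambda z, c: (torch.rand(1, 1, 64, 64) > 0.95).float()
fake_lesion_gen = lambda img, m, c: img * (1 - m) + m * img.mean()
img, mask = synthesize_lesion_image(fake_mask_gen, fake_lesion_gen,
                                    torch.rand(1, 1, 64, 64), condition=None)
# The synthetic (image, mask) pairs are then mixed with real data to train the
# downstream segmentation network.
```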
Aspect-based sentiment analysis (ABSA) involves identifying the sentiment polarity of a review sentence towards a given aspect. Deep sequential models such as RNNs, LSTMs, and GRUs are the current state-of-the-art methods for inferring sentiment polarity. These methods capture the contextual relationships among the words of a review sentence well, but they are weak at capturing long-term dependencies. Attention mechanisms, which focus only on the most important parts of the sentence, therefore play an important role. In ABSA, the position of the aspect is also crucial: words near the aspect contribute more when determining the sentiment towards it. We therefore propose a method that captures position-based information using a dependency parse tree and feeds it to the attention mechanism. Using this type of position information, rather than a simple word-distance-based position, enhances the performance of deep learning models. We conduct experiments on the SemEval'14 dataset to demonstrate the effect of dependency-parse-based position information on ABSA.
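A minimal sketch of the position weighting, assuming the dependency parse is already available as (head, dependent) token-index pairs; the inverse-distance weighting is one plausible choice, not necessarily the paper's exact formula.

```python
from collections import deque

def tree_distances(num_tokens, edges, aspect_idx):
    """Breadth-first search over the (undirected) dependency tree gives each
    token's hop distance from the aspect word."""
    adj = {i: [] for i in range(num_tokens)}
    for head, dep in edges:
        adj[head].append(dep)
        adj[dep].append(head)
    dist = {aspect_idx: 0}
    queue = deque([aspect_idx])
    while queue:
        u = queue.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                queue.append(v)
    return [dist.get(i, num_tokens) for i in range(num_tokens)]

def position_weights(distances):
    # Closer to the aspect in the parse tree -> larger weight fed to attention.
    return [1.0 / (1 + d) for d in distances]

# Toy sentence: "battery life is great", aspect "battery" at index 0.
edges = [(1, 0), (3, 1), (3, 2)]            # illustrative (head, dependent) pairs
print(position_weights(tree_distances(4, edges, aspect_idx=0)))
```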